Supervised self-coding in multilayered feedforward networks
Author
Abstract
Supervised neural-network learning algorithms have proven very successful at solving a variety of learning problems. However, they suffer from the common requirement of explicit output labels, which makes such algorithms implausible as biological models. In this paper, it is shown that pattern classification can be achieved in a multilayered feedforward neural network, without explicit output labels, by a process of supervised self-coding. The class projection is achieved by optimizing appropriate within-class uniformity and between-class discernibility criteria. The mapping function and the class labels are developed together, iteratively, using the derived self-coding backpropagation algorithm. The ability of the self-coding network to generalize to unseen data is also evaluated experimentally on real data sets and compares favorably with traditional labeled supervision in neural networks. At the same time, interesting features emerge from the proposed self-coding supervision that are absent in conventional approaches. The further implications of supervised self-coding with neural networks are also discussed.
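The abstract describes an alternating scheme in which the network weights and the class codes are refined together rather than the codes being fixed in advance. The sketch below is only a rough illustration of how such supervised self-coding could be set up: a small tanh network is trained by backpropagation toward the current per-class codes, and each code is then re-derived from the network's own outputs. The specific choices here (mean-of-outputs codes, a squared-error backpropagation step, a simple separation nudge, and all function names) are assumptions made for the example, not the paper's derived criteria.

import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hid, n_out):
    # Small two-layer tanh network with randomly initialized weights.
    return {
        "W1": rng.normal(0.0, 0.5, (n_in, n_hid)), "b1": np.zeros(n_hid),
        "W2": rng.normal(0.0, 0.5, (n_hid, n_out)), "b2": np.zeros(n_out),
    }

def forward(net, X):
    H = np.tanh(X @ net["W1"] + net["b1"])   # hidden activations
    Y = np.tanh(H @ net["W2"] + net["b2"])   # output activations, where the codes live
    return H, Y

def self_coding_step(net, X, labels, codes, lr=0.05):
    """One alternation: backprop toward the current class codes, then refresh the codes."""
    H, Y = forward(net, X)
    T = codes[labels]                          # per-sample target = current code of its class
    # Squared-error backpropagation toward the (moving) targets.
    dY = (Y - T) * (1.0 - Y ** 2)
    dH = (dY @ net["W2"].T) * (1.0 - H ** 2)
    net["W2"] -= lr * H.T @ dY / len(X)
    net["b2"] -= lr * dY.mean(axis=0)
    net["W1"] -= lr * X.T @ dH / len(X)
    net["b1"] -= lr * dH.mean(axis=0)
    # Within-class uniformity stand-in: each code becomes the mean output of its class.
    _, Y = forward(net, X)
    for c in range(len(codes)):
        members = Y[labels == c]
        if len(members):
            codes[c] = members.mean(axis=0)
    # Between-class discernibility stand-in: nudge the codes away from their common centre.
    codes += 0.1 * (codes - codes.mean(axis=0))
    np.clip(codes, -0.95, 0.95, out=codes)     # keep targets reachable by tanh outputs
    return net, codes

# Toy usage: two Gaussian blobs. No output labels are specified by hand, only class
# membership; the codes themselves emerge over the course of training.
X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)), rng.normal(1.0, 0.3, (50, 2))])
labels = np.repeat([0, 1], 50)
net, codes = init_net(2, 8, 3), rng.normal(0.0, 0.1, (2, 3))
for _ in range(300):
    net, codes = self_coding_step(net, X, labels, codes)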
Similar sources
Credit Scoring Using Supervised and Unsupervised Neural Networks
Some of the concerns that plague developers of neural network decision support systems include: (a) How do I understand the underlying structure of the problem domain; (b) How can I discover unknown imperfections in the data which might detract from the generalization accuracy of the neural network model; and (c) What variables should I include to obtain the best generalization properties in th...
Entropic Analysis and Incremental Synthesis of Multilayered Feedforward Neural Networks
Neural network architecture optimization is often a critical issue, particularly when VLSI implementation is considered. This paper proposes a new minimization method for multilayered feedforward ANNs and an original approach to their synthesis, both based on the analysis of the information quantity (entropy) flowing through the network. A layer is described as an information filter which selec...
Test Bed for Multilayered Feed forward Neural Network Architectures as Bidirectional Associative Memory
Multilayered feed-forward neural networks are considered universal approximators and hence extensively been used for function approximation. Function approximation is an instance of supervised learning which is one of the most studied topics in machine learning, artificial neural networks, pattern recognition, and statistical curve fitting. Bidirectional associative memory is another class of n...
Biologically Inspired Feedforward Supervised Learning for Deep Self-Organizing Map Networks
In this study, we propose a novel deep neural network and its supervised learning method that uses a feedforward supervisory signal. The method is inspired by the human visual system and performs human-like association-based learning without any backward error propagation. The feedforward supervisory signal that produces the correct result is preceded by the target signal and associates its con...
Supervised learning based on temporal coding in spiking neural networks
Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is...
Journal: IEEE Transactions on Neural Networks
Volume: 7, Issue: 5
Pages: -
Publication year: 1996